Loss of significance is an undesirable effect in calculations using floating-point arithmetic. It occurs when an operation on two numbers increases relative error substantially more than it increases absolute error, for example in subtracting two nearly equal numbers (known as catastrophic cancellation). The effect is that the number of accurate (significant) digits in the result is reduced unacceptably. Ways to avoid this effect are studied in numerical analysis.

==Demonstration of the problem==
The effect can be demonstrated with decimal numbers. The following example demonstrates loss of significance for a decimal floating-point data type with 10 significant digits.

Consider the decimal number

 0.1234567891234567890

A floating-point representation of this number on a machine that keeps 10 floating-point digits would be

 0.1234567891

which is fairly close – the difference is very small in comparison with either of the two numbers.

Now perform the calculation

 0.1234567891234567890 − 0.1234567890

The answer, accurate to 10 significant digits, is

 0.0000000001234567890

However, on the 10-digit floating-point machine, the calculation yields

 0.1234567891 − 0.1234567890 = 0.0000000001

Whereas the original numbers are accurate in all of the first (most significant) 10 digits, their floating-point difference is only accurate in its first nonzero digit. This amounts to loss of significance.
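The same effect can be reproduced with a short program. The sketch below uses Python's standard decimal module to simulate a machine that keeps 10 significant digits; the unary plus rounds the longer operand to the context precision before the subtraction, mirroring the example above.

<syntaxhighlight lang="python">
from decimal import Decimal, getcontext

# Simulate a decimal floating-point machine with 10 significant digits.
getcontext().prec = 10

exact = Decimal("0.1234567891234567890")   # the Decimal constructor stores all digits exactly
stored = +exact                            # unary plus rounds to context precision: 0.1234567891
b = Decimal("0.1234567890")

print(exact - b)    # 1.234567890E-10  (answer accurate to 10 significant digits)
print(stored - b)   # 1E-10            (only the first nonzero digit survives)
</syntaxhighlight>

Binary floating point behaves the same way: in IEEE 754 double precision, the expression (1.0 + 1e-16) - 1.0 evaluates to 0.0, even though the exact answer is 1e-16, because 1e-16 is smaller than the spacing between 1.0 and the next representable double.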